Course Information
Course Name
Neural Networks
Semester
111-2 (Spring 2023)
Intended Students
College of Medicine, Graduate Institute of Brain and Mind Sciences
Instructor
吳恩賜
Course Number
GIBMS7015
Course Identifier
454EM0390
Class Section

Credits
3.0
Full/Half Year
Half year
Required/Elective
Elective
Class Time
Friday, periods 6–8 (13:20–16:20)
Classroom
基1203
Remarks
This course is taught in English.
Restricted to master's-level students and above, and to students of this institute (including minor and double-major students).
Maximum enrollment: 15.
 
Course Outline
Course Description

This course will introduce the basic principles of neural networks in relation to human cognition, together with hands-on programming of simple neural networks. In addition to regular class assignments, students will read three modeling papers and apply the neural network models in these papers to create their own networks. Four example network implementations will be covered: 1) the basic perceptron, 2) attractor networks (Hopfield, 1982), 3) backpropagation (multi-layered perceptron; Rumelhart et al., 1986), and 4) unsupervised learning (Von der Malsburg, 1973).
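To give a concrete flavor of the first of these implementations, here is a minimal perceptron sketch in Python with NumPy, trained on the logical AND problem (a logic problem of the kind covered in Week 3). The dataset, initialization, learning rate, and epoch count are illustrative choices, not taken from the course materials.

```python
import numpy as np

# Illustrative data: the logical AND problem (linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # weight vector
b = 0.0                 # bias term
lr = 0.1                # learning rate (illustrative)

for epoch in range(20):
    for xi, ti in zip(X, y):
        out = int(w @ xi + b > 0)  # threshold (step) activation
        err = ti - out             # 0 if correct, otherwise +/-1
        w = w + lr * err * xi      # perceptron learning rule
        b = b + lr * err

print([int(w @ xi + b > 0) for xi in X])  # expected: [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this rule finds a separating boundary; XOR, by contrast, requires the multi-layered networks covered later in the course.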

Assignments: In addition to homework designed to aid understanding, there will be three larger course assignments: programming three of the neural networks above (in Python or R; see Course Requirements) and applying them to real-life problems or to simulations of human cognition.

Grading: Students will be graded on the quality of their assignments, in terms of the models' success and the comprehensiveness with which the models are evaluated as examples of a real cognitive phenomenon. Homework, where given, will count toward bonus credit.

Course Objectives
a) To learn the basic principles of how neural network models work. b) To make one's own simple neural networks. c) To learn how to evaluate neural network models.
Course Requirements
Enrollment is intended for students in the Graduate Institute of Brain and Mind Sciences; confidence in computer programming is expected. Jupyter Notebook (https://jupyter.org/) and R (https://www.r-project.org/; with RStudio, https://www.rstudio.com/) are free to download and install, and are recommended for the modeling work in this course; the course will use Python and R code for different models. Bring your own computer with the above software installed and ready to go. Officially, there will be no auditing unless a very special reason is given; other interested students will be considered on a case-by-case basis.
Expected Weekly Study Hours Outside Class
A person with average coding proficiency might expect to spend about 24 hours per week outside of class time to complete the assignments.
Office Hours
 
Required Readings
1. Jordan, M. I. (1986). An introduction to linear algebra in parallel distributed processing. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition, Vol. 1: Foundations (pp. 365–422). Cambridge, MA: MIT Press.

2. Aggarwal, C. C. (2018). Neural networks and deep learning: A textbook. Cham, Switzerland: Springer. Ch. 1.

3. Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences USA, 79, 2554–2558.

4. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. Cambridge, MA: MIT Press.

5. Von der Malsburg, C. (1973). Self-organization of orientation sensitive cells in the striate cortex. Kybernetik, 14, 85–100.
Reference Books
The whole of Aggarwal (2018), Neural Networks and Deep Learning: A Textbook (Springer, Cham, Switzerland), is a useful resource for this course and for neural network modeling in general.
Grading Scheme
(For reference only)
Course Schedule
Week 1 (2023/2/24): Introduction: biology, why model, and the general approach.
Week 2 (2023/3/3): Linear algebra: vectors, matrices, and matrix operations; reading: Jordan (1986). [HW 1]
Week 3 (2023/3/10): Perceptrons: nomenclature, the general neural network framework, and application to logic problems; reading: Aggarwal (2018), Ch. 1. [HW 2]
Week 4 (2023/3/17): Attractor networks 1: introduction to the principles of autoencoding and memory; reading: Hopfield (1982).
Week 5 (2023/3/24): Attractor networks 2: a simple autoencoder architecture and learning rule to instantiate content-addressable memory; attractor properties (illustrated in the first sketch below).
Week 6 (2023/3/31): Attractor networks 3: evaluating and describing the autoencoder model. [Assignment 1]
Week 7 (2023/4/7): Backpropagation 1: introduction to the principles of multi-layered perceptrons and error-based learning; reading: Rumelhart et al. (1986).
Week 8 (2023/4/14): Backpropagation 2: a simple multi-layered perceptron to instantiate error-based learning and non-linear input-output mappings (illustrated in the second sketch below).
Week 9 (2023/4/21): Backpropagation 3: evaluating and describing the multi-layered perceptron model. [Assignment 2]
Week 10 (2023/4/28): Unsupervised learning 1: introduction to the principles of functional self-organization and convolution in V1 orientation selectivity; reading: Von der Malsburg (1973).
Week 11 (2023/5/5): Unsupervised learning 2: unpacking the neural network model in Von der Malsburg (1973) (a related sketch follows below).
Week 12 (2023/5/12): Unsupervised learning 3: evaluating the Von der Malsburg model. [Assignment 3]
Week 13 (2023/5/19): Recurrent neural networks; reading: Aggarwal (2018), Ch. 2.
Week 14 (2023/5/26): Convolutional neural networks; reading: Aggarwal (2018), Ch. 3.
Week 15 (2023/6/2): Exploding and vanishing gradients, overfitting, regularization.
Week 16 (2023/6/9): End-of-term exams (no class).
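The attractor-network unit (Weeks 4–6) centers on content-addressable memory: patterns are stored with a Hebbian rule, and a corrupted probe is pulled back to the nearest stored pattern. Below is a minimal Hopfield-style sketch in Python with NumPy; the stored patterns, network size, and update schedule are illustrative assumptions, not the course's assignment code.

```python
import numpy as np

# Two illustrative 8-unit patterns to store (values in {-1, +1}).
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian storage: sum of outer products, with the diagonal zeroed.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

rng = np.random.default_rng(0)

def recall(state, sweeps=5):
    """Asynchronously update units for a fixed number of random-order sweeps."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Probe with a corrupted copy of pattern 0 (two bits flipped).
probe = patterns[0].copy()
probe[[1, 2]] *= -1
print(recall(probe))  # settles back to pattern 0
```

Zeroing the diagonal removes trivial self-reinforcement, and with symmetric weights the asynchronous updates never increase the network's energy, which is what makes stored patterns behave as attractors (Hopfield, 1982).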
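For the backpropagation unit (Weeks 7–9), the sketch below trains a small multi-layered perceptron on XOR, the classic non-linear input-output mapping a single-layer perceptron cannot solve. The architecture (2-4-1, sigmoid units), squared-error loss, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))  # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5                      # learning rate (illustrative)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)        # hidden activations
    y = sigmoid(h @ W2 + b2)        # network outputs
    # Backward pass: squared-error loss, sigmoid derivative y * (1 - y).
    dy = (y - t) * y * (1 - y)      # error signal at the output layer
    dh = (dy @ W2.T) * h * (1 - h)  # error propagated back to the hidden layer
    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ dy)
    b2 -= lr * dy.sum(axis=0)
    W1 -= lr * (X.T @ dh)
    b1 -= lr * dh.sum(axis=0)

print(y.ravel().round(2))  # typically close to [0, 1, 1, 0]
```

With this setup the outputs typically approach [0, 1, 1, 0], though backpropagation on XOR can occasionally stall in a poor local minimum depending on initialization.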
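Finally, for the unsupervised-learning unit (Weeks 10–12): Von der Malsburg's (1973) model of self-organizing orientation selectivity is considerably richer than can be shown here, so the sketch below substitutes a much simpler relative, Oja's rule, a normalized Hebbian rule under which a single linear unit's weights converge to the first principal component of its inputs. Data and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated 2-D inputs: variance is largest along the [1, 1] direction.
data = rng.normal(size=(1000, 2)) @ np.array([[1.0, 0.8], [0.8, 1.0]])

w = rng.normal(size=2)  # weight vector of a single linear unit
lr = 0.01               # learning rate (illustrative)

for x in data:
    y = w @ x
    w += lr * y * (x - y * w)  # Oja's rule: Hebbian term plus weight decay

print(w / np.linalg.norm(w))   # roughly +/-[0.707, 0.707], the top principal component
```

The decay term (-y²w) plays the same role as the explicit weight normalization in Von der Malsburg (1973): it keeps Hebbian growth bounded so that structure, rather than runaway weights, emerges.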